Insight Modeling


"The purpose of computing is insight, not numbers," said Richard Hamming. Ditto for any representation of reality! As beautifully explained by section 3.4 of the 2009 JASON report "Rare Events":

The main use of models
in science is to develop
intuition for hard problems.

In greater detail:

Predictive models are not the only kind of models scientists use. Insight models are used to build expert intuition — such as visualizing complex datasets, or just helping to modularize and structure the steps in a mental model of a problem. Predictive mathematical modeling is the most scientifically demanding way in which models are used, but it is probably not the main use of models in science. The main use of models in science is to develop intuition for hard problems. Models are used to illustrate, visualize, and analyze a problem, to help human experts see patterns in data, and to systematize an expert's thinking in a way that might reveal key gaps in knowledge about the problem.

An insight model need not be complicated. A simple systematic cartoon on a napkin may suddenly reveal a missing facet of a problem. Other models may be complicated. A red-team exercise may reveal an unanticipated vulnerability; an agent-based simulation may help illustrate inefficiencies and bottlenecks in resource allocation; a social network analysis may help clearly visualize a pattern of connections between people in a large dataset.

Experts develop their own ways of organizing and viewing their data as they think about a problem — such as drawing cartoons showing relationships, or developing a personal system of archiving and indexing data. Experts develop these models for themselves, and they learn from the experience of other experts in their field. Because experts spend most of their time doing their job rather than developing new tools, there is good reason to fund free-standing research and development projects into new (insight) models.

From a programmatic standpoint of funding research, the main problem with standalone research projects that aim to create new (insight) models is that they separate the model's creator from the model's user community, so they tend to face an adoption barrier. Experts are rightly skeptical of new tools developed by non-experts, especially if a model appears complex, mathematical, and highly abstracted rather than hewing closely to real-world data analysis needs. Success of an insight tool should ultimately be judged by how many experts use it and find it indispensable in their work. "Useful to experts" necessarily includes many factors that become just as important as the scientific validity of the model — issues such as software quality and usability, in the case of computer models. Therefore an important part of any research plan to develop new models is the researchers' plan for collaboration and adoption by experts. Will the tool be used and evaluated by real-world analysts? Do they find it useful? Will it spread to other analysts if it is successful? ...
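To make the "simple insight model" idea concrete (my own toy illustration, not something from the JASON report): even a dozen lines of Python over a list of who-knows-whom pairs can surface the hubs and the separate clusters in a social network, the kind of quick sketch an analyst might toss off just to see the shape of a dataset. All names and connections below are invented.

```python
# Toy "insight model": a quick look at the shape of a who-knows-whom dataset.
# All names and connections here are invented for illustration.
from collections import defaultdict

connections = [
    ("Alice", "Bob"), ("Alice", "Carol"), ("Bob", "Carol"),
    ("Carol", "Dave"), ("Dave", "Eve"), ("Frank", "Grace"),
]

# Build an undirected adjacency map.
neighbors = defaultdict(set)
for a, b in connections:
    neighbors[a].add(b)
    neighbors[b].add(a)

# Who are the hubs?  Rank people by number of distinct contacts.
for person in sorted(neighbors, key=lambda p: len(neighbors[p]), reverse=True):
    print(f"{person}: {len(neighbors[person])} contacts")

# How many separate clusters are there?  Simple flood fill over the adjacency map.
seen, clusters = set(), []
for start in neighbors:
    if start in seen:
        continue
    cluster, frontier = set(), [start]
    while frontier:
        person = frontier.pop()
        if person not in cluster:
            cluster.add(person)
            frontier.extend(neighbors[person] - cluster)
    seen |= cluster
    clusters.append(sorted(cluster))
print("Clusters:", clusters)
```

The point is not the code; it is the JASON observation above: a small, systematic view of the data (a cartoon, a ranking, a cluster list) can reveal a missing facet of a problem long before any predictive model exists.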

(cf Research and Life (2000-09-07), Great Thoughts Time (2013-11-29), Tolerate Ambiguity (2018-02-01), ...) - ^z - 2019-12-31